add parallel test running to the framework. #1664
base: main
Conversation
Reviewing.
```python
    failures, previous_test, test_number = pool.starmap(run_specific_test, zip(tests, repeat([failures, previous_test, test_number])))
else:
    for test_itr in tests:
        failures, previous_test, test_number = run_specific_test(test_itr, [failures, previous_test, test_number])
```
From the code, the word "RUN_PARALLEL" is expected in the arg list to run in parallel. But this causes errors right at the beginning, because it is not ignored like the other args handled in lines 337-340 (of the changed file). So the following should be added right after:

```python
if re.search("RUN_PARALLEL", arg):
    continue
```
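As a sketch of how that skip would sit inside an argument loop (the `parse_args` helper and its surrounding logic are hypothetical stand-ins for the harness's own parsing; only the `RUN_PARALLEL` check comes from the suggestion above):

```python
import re

def parse_args(argv):
    """Hypothetical arg loop: collect test names, skip the flag."""
    tests = []
    run_parallel = False
    for arg in argv[1:]:
        if re.search("RUN_PARALLEL", arg):
            run_parallel = True
            continue  # don't treat the flag as a test name
        tests.append(arg)
    return tests, run_parallel

tests, run_parallel = parse_args(["prog", "test_a", "RUN_PARALLEL", "test_b"])
```

Without the `continue`, the flag falls through and is handled as if it were a test name, which is what triggers the early errors.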
Second, after the above is fixed, it still can't run (at least on Windows). The following error is seen: "An attempt has been made to start a new process before the current process has finished its bootstrapping phase."

This can be fixed by wrapping the call to the main execution function (run_tests), and possibly the surrounding code, in a main() function and then guarding it as below:

```python
if __name__ == "__main__":
    main()
```
After this, the tests run to completion (for a small subset of six tests), but "ValueError: too many values to unpack" is seen at the end.
The test-specific stdout is also all mixed up, due to the tests running in parallel.
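One common way to avoid the interleaving (a sketch, not the harness's current behavior; `run_captured` is a hypothetical wrapper) is to capture each test's stdout inside the worker and replay it as a single block when the result comes back:

```python
import io
from contextlib import redirect_stdout
from multiprocessing import Pool

def run_captured(test):
    # Capture the test's chatty stdout inside the worker...
    buf = io.StringIO()
    with redirect_stdout(buf):
        print(f"running {test}")  # stand-in for real test output
    return test, buf.getvalue()

if __name__ == "__main__":
    with Pool(2) as pool:
        for name, output in pool.map(run_captured, ["a", "b"]):
            # ...and replay it as one contiguous block per test
            print(f"--- {name} ---\n{output}", end="")
```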
The help message should also be updated to mention the new "RUN_PARALLEL" argument.
It looks like the change needs some rework before it can be reviewed again.
Thanks for the review; working on a fix now. I've been told that the Python test harness is not the most up-to-date method of running the tests, so I am looking to abandon this in favor of fixing it in the proper place.
Can we remove the focused review tag on this one if we are planning a different fix?
Removing "focused review" for now - not quite ready yet.
python parallel test updates.